Stochastic Gradient MCMC for State Space Models
Authors
Abstract
Similar Resources
MCMC for State Space Models
In this chapter we look at MCMC methods for a class of time-series models called state-space models. The idea of state-space models is that there is an unobserved state of interest that evolves through time, and that partial observations of the state are made at successive time-points. We will denote the state by X and the observations by Y, and assume that our state space model has the following s...
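The setup described above can be made concrete with a minimal simulation. The sketch below (our illustration, not code from the chapter; the AR(1) transition and noise levels are assumed for concreteness) generates a linear-Gaussian state-space model: a latent state X_t that evolves through time and partial, noisy observations Y_t of it. MCMC methods for such models target the posterior over the latent path X given the observations Y.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian state-space model for illustration.
T = 100          # number of time points
phi = 0.9        # state transition coefficient (assumed)
sigma_x = 1.0    # state (process) noise std
sigma_y = 0.5    # observation noise std

x = np.zeros(T)  # latent states X_1..X_T
y = np.zeros(T)  # observations Y_1..Y_T
x[0] = rng.normal(0.0, sigma_x)
y[0] = x[0] + rng.normal(0.0, sigma_y)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0.0, sigma_x)  # state evolves through time
    y[t] = x[t] + rng.normal(0.0, sigma_y)            # partial observation of the state

# An MCMC sampler for this model would target p(x_{1:T} | y_{1:T}),
# together with any unknown parameters such as phi.
```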
Distributed Stochastic Gradient MCMC
Probabilistic inference on a big data scale is becoming increasingly relevant to both the machine learning and statistics communities. Here we introduce the first fully distributed MCMC algorithm based on stochastic gradients. We argue that stochastic gradient MCMC algorithms are particularly suited for distributed inference because individual chains can draw mini-batches from their local pool ...
Stochastic Gradient MCMC Methods for Hidden Markov Models
Stochastic gradient MCMC (SG-MCMC) algorithms have proven useful in scaling Bayesian inference to large datasets under an assumption of i.i.d. data. We instead develop an SG-MCMC algorithm to learn the parameters of hidden Markov models (HMMs) for time-dependent data. There are two challenges to applying SG-MCMC in this setting: the latent discrete states, and needing to break dependencies when co...
CPSG-MCMC: Clustering-Based Preprocessing method for Stochastic Gradient MCMC
In recent years, stochastic gradient Markov Chain Monte Carlo (SG-MCMC) methods have been developed to process large-scale datasets by iteratively learning from small minibatches. However, the high variance caused by naive subsampling usually slows down the convergence to the desired posterior distribution. In this paper, we propose an effective subsampling strategy to reduce the variance based on a ...
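The variance-reduction idea in this abstract can be illustrated in spirit (this is not the CPSG-MCMC paper's exact algorithm; the clustering step and proportions here are our assumptions) by stratified subsampling: pre-cluster the data once, then draw each minibatch proportionally from every cluster, so minibatch statistics fluctuate less than under naive uniform subsampling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bimodal dataset; a sign-based split stands in for a
# k-means-style clustering preprocessing step.
data = np.concatenate([rng.normal(-5, 1, 600), rng.normal(5, 1, 400)])
labels = (data > 0).astype(int)

def stratified_minibatch(data, labels, batch_size, rng):
    """Draw from each precomputed cluster in proportion to its size."""
    n = len(data)
    parts = []
    for k in np.unique(labels):
        idx = np.flatnonzero(labels == k)
        m = max(1, round(batch_size * len(idx) / n))
        parts.append(rng.choice(data[idx], size=m, replace=False))
    return np.concatenate(parts)

mb = stratified_minibatch(data, labels, batch_size=100, rng=rng)
# Because every cluster is represented in its true proportion, minibatch
# summaries (e.g. the mean) track the full-data values more tightly than
# a uniform subsample of the same size.
```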
A Complete Recipe for Stochastic Gradient MCMC
Many recent Markov chain Monte Carlo (MCMC) samplers leverage continuous dynamics to define a transition kernel that efficiently explores a target distribution. In tandem, a focus has been on devising scalable variants that subsample the data and use stochastic gradients in place of full-data gradients in the dynamic simulations. However, such stochastic gradient MCMC samplers have lagged behin...
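The substitution of stochastic gradients for full-data gradients mentioned above is the core of SG-MCMC samplers. A minimal sketch, using stochastic gradient Langevin dynamics (SGLD, a standard SG-MCMC sampler; the Gaussian model, prior, and step size below are our assumptions, not taken from this paper): at each step, rescale a minibatch gradient of the log-likelihood, add the prior gradient, take a small gradient step, and inject Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy posterior: data ~ N(theta, 1) with a N(0, 10^2) prior on theta.
data = rng.normal(2.0, 1.0, size=10_000)
N, n, eps = len(data), 100, 1e-4   # dataset size, minibatch size, step size

theta, samples = 0.0, []
for step in range(5_000):
    mb = rng.choice(data, size=n, replace=False)
    grad_prior = -theta / 10.0**2              # d/dtheta of log N(theta; 0, 10^2)
    grad_lik = (N / n) * np.sum(mb - theta)    # minibatch gradient, rescaled by N/n
    # SGLD update: half-step along the stochastic gradient plus injected noise.
    theta += 0.5 * eps * (grad_prior + grad_lik) + rng.normal(0.0, np.sqrt(eps))
    samples.append(theta)

posterior_mean = np.mean(samples[2_000:])      # close to the sample mean of the data
```

With a fixed step size the chain only approximates the posterior (minibatch gradient noise inflates the stationary variance); the "complete recipe" line of work characterizes which such dynamics have the correct target.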
Journal
Journal title: SIAM Journal on Mathematics of Data Science
Year: 2019
ISSN: 2577-0187
DOI: 10.1137/18m1214780